
    Spiking LCA in a Neural Circuit with Dictionary Learning and Synaptic Normalization

    The Locally Competitive Algorithm (LCA) [17, 18] was put forward as a model of primary visual cortex [14, 17] and has been used extensively as a sparse coding algorithm for multivariate data. LCA has been implemented on neuromorphic processors, including IBM’s TrueNorth processor [10] and Intel’s neuromorphic research processor, Loihi, where it has proved highly power-efficient [8]. When combined with dictionary learning [13], LCA encounters synaptic instability [24]: as a synapse’s strength grows, its activity increases, further enhancing synaptic strength, a runaway condition in which synapses saturate [3, 15]. A number of approaches have been suggested to stabilize this phenomenon [1, 2, 5, 7, 12]. Previous work demonstrated that, by extending the cost function used to generate LCA updates, synaptic normalization could be achieved, eliminating synaptic runaway, and that the resulting algorithm could be implemented in a firing rate model [7]. Here, we implement a probabilistic approximation to this firing rate model as a spiking LCA algorithm that includes dictionary learning and synaptic normalization. The algorithm is based on a synfire-gated synfire chain information control network acting in concert with Hebbian synapses [16, 19]. We show that this algorithm yields correct classification on numeric data taken from the MNIST dataset.
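
    To make the ingredients above concrete, here is a minimal, non-spiking NumPy sketch of LCA sparse coding with dictionary learning and explicit column normalization. It is an illustration only, not the spiking, synfire-gated implementation described in the abstract; the threshold, learning rate, and time constant are arbitrary assumptions.

```python
import numpy as np

def soft_threshold(u, lam):
    """LCA activation: soft-threshold the membrane potentials."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_sparse_code(x, Phi, lam=0.1, tau=10.0, n_steps=200):
    """Rate-based LCA dynamics: neurons compete via lateral inhibition
    until a sparse code for input x emerges."""
    u = np.zeros(Phi.shape[1])                  # membrane potentials
    G = Phi.T @ Phi - np.eye(Phi.shape[1])      # lateral inhibition (competition)
    b = Phi.T @ x                               # feed-forward drive
    for _ in range(n_steps):
        a = soft_threshold(u, lam)              # current sparse activations
        u += (b - u - G @ a) / tau              # leaky integration with competition
    return soft_threshold(u, lam)

def dictionary_update(x, a, Phi, eta=0.01):
    """Hebbian-style dictionary update followed by column normalization;
    normalizing keeps synaptic strengths bounded and prevents runaway."""
    Phi += eta * np.outer(x - Phi @ a, a)
    Phi /= np.linalg.norm(Phi, axis=0, keepdims=True) + 1e-12
    return Phi

# toy usage on random data
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)
x = rng.standard_normal(64)
a = lca_sparse_code(x, Phi)
Phi = dictionary_update(x, a, Phi)
```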

    Event-driven Vision and Control for UAVs on a Neuromorphic Chip

    Event-based vision sensors achieve up to three orders of magnitude better speed-versus-power-consumption trade-off in high-speed control of UAVs compared to conventional image sensors. Event-based cameras produce a sparse stream of events that can be processed more efficiently and with lower latency than frames, enabling ultra-fast, vision-driven control. Here, we explore how an event-based vision algorithm can be implemented as a spiking neuronal network on a neuromorphic chip and used in a drone controller. We show how seamless integration of event-based perception on chip leads to even faster control rates and lower latency. In addition, we demonstrate how online adaptation of the SNN controller can be realized using on-chip learning. Our spiking neuronal network is the first example of an on-chip, neuromorphic vision-based controller solving a high-speed UAV control task. The excellent scalability of processing in neuromorphic hardware opens the possibility of solving more challenging visual tasks in the future and integrating visual perception in fast control loops.
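
    As a hedged illustration of the event-driven control idea, the sketch below consumes a sparse stream of timestamped vision events as soon as they arrive (rather than waiting for frames) and crudely adapts a controller gain online. It is a toy, not the authors' spiking on-chip controller; the proportional controller and the adaptation rule are illustrative assumptions.

```python
import numpy as np

def event_driven_controller(events, target=0.0, dt=1e-3, n_steps=1000,
                            gain=0.5, lr=1e-3):
    """Toy event-driven control loop: each (time, measurement) event updates
    the error estimate immediately; a crude online rule adapts the gain."""
    error, log = 0.0, []
    event_iter = iter(sorted(events))                # events: (time, measurement)
    next_event = next(event_iter, None)
    for step in range(n_steps):
        t = step * dt
        # consume every event that has arrived by time t (no frame to wait for)
        while next_event is not None and next_event[0] <= t:
            error = target - next_event[1]
            next_event = next(event_iter, None)
        command = gain * error                       # proportional command
        gain += lr * abs(error)                      # illustrative online adaptation only
        log.append((t, command))
    return log

# toy usage: a few sparse position events
events = [(0.05, 0.40), (0.12, 0.30), (0.30, 0.10), (0.55, 0.02)]
trace = event_driven_controller(events)
```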

    Pose Estimation and Map Formation with Spiking Neural Networks: towards Neuromorphic SLAM

    In this paper, we investigate the use of ultra-low-power, mixed-signal analog/digital neuromorphic hardware for the implementation of biologically inspired neuronal path integration and map formation for a mobile robot. We perform spiking network simulations of the developed architecture, interfaced to a simulated robotic vehicle. We then port the neuronal map formation architecture onto two connected neuromorphic devices, one of which features on-board plasticity, and demonstrate the feasibility of a neuromorphic realization of simultaneous localization and mapping (SLAM).
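
    The computation that neuronal path integration approximates can be sketched in a few lines of plain NumPy: accumulate heading and position from velocity commands. This is dead reckoning only, not the spiking architecture or its mapping to the neuromorphic devices; the function and variable names are illustrative. In spiking realizations, the same accumulation is typically carried out by dedicated neuronal populations, with map formation relying on plasticity to associate observations with the current pose estimate.

```python
import numpy as np

def integrate_path(lin_vel, ang_vel, dt=0.01):
    """Dead-reckoning path integration: accumulate heading and 2D position
    from linear and angular velocity commands."""
    theta, pos = 0.0, np.zeros(2)
    trajectory = [pos.copy()]
    for v, w in zip(lin_vel, ang_vel):
        theta += w * dt                                           # heading integration
        pos += v * dt * np.array([np.cos(theta), np.sin(theta)])  # position integration
        trajectory.append(pos.copy())
    return np.array(trajectory)

# toy usage: drive straight, then turn while moving
lin_vel = np.full(200, 0.5)
ang_vel = np.concatenate([np.zeros(100), np.full(100, np.pi / 2)])
traj = integrate_path(lin_vel, ang_vel)
```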

    An On-chip Spiking Neural Network for Estimation of the Head Pose of the iCub Robot

    In this work, we present a neuromorphic architecture for head pose estimation and scene representation for the humanoid iCub robot. The spiking neuronal network is fully realized on Intel’s neuromorphic research chip, Loihi, and precisely integrates the issued motor commands to estimate the iCub’s head pose in a neuronal path-integration process. The neuromorphic vision system of the iCub is used to correct for drift in the pose estimate. Positions of objects in front of the robot are memorized using on-chip synaptic plasticity. We present real-time robotic experiments using two degrees of freedom (DoF) of the robot’s head and show precise path integration, visual reset, and on-chip learning of object positions. We discuss the requirements for integrating the robotic system and neuromorphic hardware with current technologies.
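
    A minimal sketch of the interplay between motor-command integration and a visual "reset" that cancels accumulated drift is shown below. It is plain NumPy and only illustrative of the idea; the correction gain and the form of the visual observation are assumptions, and it is not the on-chip spiking implementation.

```python
import numpy as np

def estimate_head_pose(motor_vel, visual_obs, dt=0.01, gain=0.2):
    """Integrate motor velocity commands into a pose estimate; whenever a
    visual pose observation is available, pull the estimate toward it
    to correct the drift accumulated by pure integration."""
    pose = np.zeros(2)                         # e.g. pan and tilt angles
    estimates = []
    for t, vel in enumerate(motor_vel):
        pose += vel * dt                       # path integration of motor commands
        if t in visual_obs:                    # visual "reset" event
            pose += gain * (visual_obs[t] - pose)
        estimates.append(pose.copy())
    return np.array(estimates)

# toy usage: constant velocity commands plus one visual observation
motor_vel = np.tile(np.array([0.2, -0.1]), (500, 1))
visual_obs = {250: np.array([0.45, -0.22])}
poses = estimate_head_pose(motor_vel, visual_obs)
```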

    Neuromorphic Visual Odometry with Resonator Networks

    Autonomous agents require self-localization to navigate in unknown environments. They can use Visual Odometry (VO) to estimate self-motion and localize themselves using visual sensors. Unlike inertial sensors, which suffer from drift, and wheel encoders, which are affected by slippage, this motion-estimation strategy is compromised by neither. However, VO with conventional cameras is computationally demanding, limiting its application in systems with strict low-latency, low-memory, and low-energy requirements. Event-based cameras and neuromorphic computing hardware offer a promising low-power solution to the VO problem. However, conventional VO algorithms are not readily convertible to neuromorphic hardware. In this work, we present a VO algorithm built entirely of neuronal building blocks suitable for neuromorphic implementation. The building blocks are groups of neurons representing vectors in the computational framework of Vector Symbolic Architecture (VSA), which was proposed as an abstraction layer for programming neuromorphic hardware. The VO network we propose generates and stores a working memory of the presented visual environment, and updates this working memory while simultaneously estimating the changing location and orientation of the camera. We demonstrate how VSA can be leveraged as a computing paradigm for neuromorphic robotics. Moreover, our results represent an important step towards using neuromorphic computing hardware for fast and power-efficient VO and the related task of simultaneous localization and mapping (SLAM). We validate this approach experimentally in a simple robotic task and with an event-based dataset, demonstrating state-of-the-art performance in these settings.
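
    The central VSA primitives such a network builds on, binding vectors together and querying a memory by unbinding, can be sketched with complex phasor hypervectors (one common VSA flavor). This is only a toy illustration of the vector operations, not the authors' VO network; a working memory of several bound pairs would be kept as a sum of such vectors and cleaned up against a codebook after unbinding.

```python
import numpy as np

def random_phasor(dim, rng):
    """Random complex phasor hypervector (unit-magnitude components)."""
    return np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, dim))

def bind(a, b):
    """VSA binding by element-wise multiplication of phasors."""
    return a * b

def unbind(c, a):
    """Unbinding: multiply by the complex conjugate (the phasor inverse)."""
    return c * np.conj(a)

rng = np.random.default_rng(0)
dim = 2048
landmark = random_phasor(dim, rng)            # 'what': a visual feature
location = random_phasor(dim, rng)            # 'where': a position in the map
memory = bind(landmark, location)             # store one (what, where) pair
recovered = unbind(memory, landmark)          # query: where was this landmark?
similarity = np.abs(np.vdot(recovered, location)) / dim
print(f"similarity to the stored location: {similarity:.2f}")  # close to 1.0
```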

    Neuromorphic Visual Scene Understanding with Resonator Networks

    Inferring the position of objects and their rigid transformations is still an open problem in visual scene understanding. Here we propose a neuromorphic solution that utilizes an efficient factorization network based on three key concepts: (1) a computational framework based on Vector Symbolic Architectures (VSA) with complex-valued vectors; (2) Hierarchical Resonator Networks (HRN), designed to deal with the non-commutative nature of translation and rotation in visual scenes when both are used in combination; and (3) a multi-compartment spiking phasor neuron model for implementing complex-valued vector binding on neuromorphic hardware. The VSA framework uses vector binding operations to produce generative image models in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which in turn can be efficiently factorized by a resonator network to infer objects and their poses. The HRN enables the definition of a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition, and for rotation and scaling within the other. The spiking neuron model allows the resonator network to be mapped onto efficient and low-power neuromorphic hardware. In this work, we demonstrate our approach using synthetic scenes composed of simple 2D shapes undergoing rigid geometric transformations and color changes. A companion paper demonstrates this approach in real-world application scenarios for machine vision and robotics.
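
    The factorization step performed by a resonator network can be illustrated with a toy, non-spiking, non-hierarchical sketch: given a vector formed by binding one codeword from each of several codebooks, the network iteratively unbinds its current estimates of the other factors and cleans up the result against the corresponding codebook. All sizes and names below are illustrative assumptions, not the HRN described in the abstract.

```python
import numpy as np

def resonator_factorize(s, codebooks, n_iter=30):
    """Toy resonator network for complex phasor vectors: recover one codeword
    per codebook from their element-wise (binding) product s."""
    # initialize each factor estimate as the phase-projected codebook superposition
    est = [np.exp(1j * np.angle(cb.sum(axis=0))) for cb in codebooks]
    for _ in range(n_iter):
        for i, cb in enumerate(codebooks):
            # unbind the current estimates of all other factors from s
            others = np.ones_like(s)
            for j, e in enumerate(est):
                if j != i:
                    others = others * e
            target = s * np.conj(others)
            # clean up through the codebook: project onto codewords, then back onto phasors
            scores = np.conj(cb) @ target                  # similarity to each codeword
            est[i] = np.exp(1j * np.angle(cb.T @ scores))
    # read out the best-matching codeword index for each factor
    return [int(np.argmax(np.abs(np.conj(cb) @ e))) for cb, e in zip(codebooks, est)]

# toy usage: factorize the binding of two codewords
rng = np.random.default_rng(1)
dim, K = 1024, 16
cb1 = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (K, dim)))
cb2 = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (K, dim)))
s = cb1[3] * cb2[7]
print(resonator_factorize(s, [cb1, cb2]))  # expected to recover indices [3, 7]
```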